
Gatsby Computational Neuroscience Unit


Christian Machens

 

Thursday 30th November 2017

 

Time: 4.00pm

 

Ground Floor Seminar Room

25 Howland Street, London, W1T 4JG

 

Efficient and balanced: Neural networks revamped.

We have come to think of neural networks from a bottom-up perspective.
Each neuron is characterized by an input/output function, and a network's
computational abilities emerge as a property of the collective. While immensely
successful (see the recent deep-learning craze), this view has also created several
persistent puzzles in theoretical neuroscience. The first puzzle is spikes, which
have largely remained a nuisance rather than a feature of neural systems.
The second puzzle is learning, which has been hard or impossible to achieve
without violating the constraints of local information flow. The third puzzle is
robustness to perturbations, which is a ubiquitous feature of real neural systems
but is often ignored in neural network models. I am going to argue that a resolution to these
puzzles comes from a top-down perspective. We make two key assumptions.
First, we assume that the effective output of a neural network can be extracted
via linear readouts from the population. Second, we assume that a network
seeks to bound the error on a given computation, and that each neuron's voltage
represents part of this global error. Spikes are fired to keep this error in check.
These assumptions yield efficient networks that exhibit irregular and asynchronous
spike trains, balance of excitatory and inhibitory currents, and robustness to
perturbations. I will discuss the implications of the theory, prospects for
experimental tests, and future challenges.
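
The core mechanism sketched in the abstract (a linear readout of the population and per-neuron voltages that represent a projection of the global coding error, with spikes fired to keep that error bounded) can be illustrated with a toy simulation. The sketch below is not the implementation presented in the talk; the decoder weights D, the thresholds, the decay rate, the one-dimensional sinusoidal target, and the one-spike-per-step rule are all simplifying assumptions chosen for illustration.

import numpy as np

# Toy sketch of a spike-coding network tracking a 1-D signal.
# All parameter choices here are illustrative assumptions.

rng = np.random.default_rng(0)

dt = 1e-3          # time step (s)
T = 2.0            # total simulated time (s)
steps = int(T / dt)
lam = 10.0         # decay rate of the filtered spike trains (1/s)

N = 20                               # number of neurons
D = rng.normal(0, 0.1, size=N)       # decoding weights for the linear readout
thresh = D**2 / 2                    # firing thresholds (||D_i||^2 / 2)

# target signal: a slow sinusoid
t = np.arange(steps) * dt
x = np.sin(2 * np.pi * 1.0 * t)

r = np.zeros(N)                      # leaky-filtered spike trains
x_hat = np.zeros(steps)              # linear readout of the population

for k in range(steps):
    x_hat[k] = D @ r                 # readout: x_hat = D r
    V = D * (x[k] - x_hat[k])        # each voltage is a projection of the global error
    # fire the neuron whose voltage most exceeds its threshold (one spike per step)
    i = np.argmax(V - thresh)
    if V[i] > thresh[i]:
        r[i] += 1.0                  # the spike nudges the readout by D_i
    r += dt * (-lam * r)             # leaky decay of the filtered spike trains

print("readout error (RMS):", np.sqrt(np.mean((x - x_hat) ** 2)))

Running the sketch shows the readout tracking the signal while individual neurons fire only when their projected error exceeds threshold, which is the sense in which spikes keep the global error in check.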